Tars: Timeliness-aware Adaptive Replica Selection for Key-Value Stores
In current large-scale distributed key-value stores, a single end-user
request may lead to key-value accesses across tens or hundreds of servers. The
tail latency of these key-value accesses is crucial to the user experience and
greatly impacts revenue. To cut tail latency, it is crucial for clients to
choose the fastest replica server, as far as possible, for each key-value
access. Aware of the challenges posed by time-varying performance across
servers and by herd behavior, an adaptive replica selection scheme, C3, was
recently proposed. C3 brings feedback from individual servers into replica
ranking to reflect their time-varying performance, and introduces a
distributed rate-control and backpressure mechanism. Despite C3's good
performance, we reveal a timeliness issue in C3 that has a large impact on
both replica ranking and rate control, and we propose Tars (timeliness-aware
adaptive replica selection). Following the same framework as C3, Tars improves
replica ranking by taking the timeliness of the feedback information into
consideration, and also revises C3's rate control. Simulation results confirm
that Tars outperforms C3.
Comment: 10 pages, submitted to ICDCS 201
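To make the kind of ranking described above concrete, here is a minimal, hypothetical sketch: a C3-style score (response-time EWMA plus a cubic penalty on the estimated queue size) discounted by the age of the server's feedback. The exact formulas in C3 and Tars differ from this; the exponential staleness weighting and all parameter values are our illustrative assumptions.

```python
import math

def c3_style_score(response_time_ewma, queue_size, service_time_ewma,
                   feedback_age_s=0.0, staleness_tau=0.5):
    """Rank a replica: lower is better.

    C3-style base score: response-time EWMA plus a cubic penalty on the
    estimated queue size. The timeliness discount below is a hypothetical
    stand-in for handling stale feedback, not the papers' formula.
    """
    base = response_time_ewma + (queue_size ** 3) * service_time_ewma
    # Older feedback is trusted less: blend the score toward a neutral
    # prior (just the response-time EWMA) as the feedback ages.
    freshness = math.exp(-feedback_age_s / staleness_tau)
    return freshness * base + (1.0 - freshness) * response_time_ewma

def pick_replica(replicas):
    """replicas: name -> (rt_ewma, queue_size, svc_ewma, feedback_age_s)."""
    return min(replicas, key=lambda name: c3_style_score(*replicas[name]))

replicas = {
    "s1": (2.0, 4, 1.0, 0.05),   # slower EWMA, but a short queue
    "s2": (1.5, 10, 1.0, 0.05),  # faster EWMA, long queue dominates
}
```

Here `pick_replica(replicas)` selects "s1": the cubic queue penalty outweighs s2's slightly lower response time, which is the herd-avoiding behavior this family of schemes aims for.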
Automated Identification of Atrial Fibrillation from Single-lead ECGs Using Multi-branching ResNet
Atrial fibrillation (AF) is the most common cardiac arrhythmia, which is
clinically identified by an irregular and rapid heart rhythm. AF puts a
patient at risk of forming blood clots, which can eventually lead to heart
failure, stroke, or even sudden death. It is of critical importance to develop
an advanced analytical model that can effectively interpret the
electrocardiography (ECG) signals and provide decision support for accurate AF
diagnostics. In this paper, we propose an innovative deep-learning method for
automated AF identification from single-lead ECGs. We first engage the
continuous wavelet transform (CWT) to extract time-frequency features from ECG
signals. Then, we develop a convolutional neural network (CNN) structure that
incorporates ResNet for effective network training and multi-branching
architectures for addressing the imbalanced data issue to process the 2D
time-frequency features for AF classification. We evaluate the proposed
methodology using two real-world ECG databases. The experimental results show
superior performance of our method compared with traditional deep learning
models.
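The CWT feature-extraction step can be sketched with plain NumPy. The Morlet wavelet, the scale grid, and the toy sinusoid standing in for an ECG below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet dilated by `scale`, sampled at times t."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2) / np.sqrt(scale)

def cwt_scalogram(signal, fs, scales):
    """Return |CWT| as a (n_scales, n_samples) 2-D time-frequency map."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs   # kernel support centred on zero
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        out[i] = np.abs(np.convolve(signal, morlet(t, s), mode="same"))
    return out

# Toy stand-in for a single-lead ECG: a 5 Hz tone sampled at 250 Hz
fs = 250.0
tt = np.arange(0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 5.0 * tt)
feats = cwt_scalogram(sig, fs, np.linspace(0.02, 0.4, 32))
```

The resulting 2-D map (scales on one axis, time on the other) is the kind of image-like input a ResNet-style CNN can then classify.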
Deep Descriptor Transforming for Image Co-Localization
Reusable model design becomes desirable with the rapid expansion of machine
learning applications. In this paper, we focus on the reusability of
pre-trained deep convolutional models. Specifically, different from treating
pre-trained models as feature extractors, we reveal more treasures beneath
convolutional layers, i.e., the convolutional activations could act as a
detector for the common object in the image co-localization problem. We propose
a simple but effective method, named Deep Descriptor Transforming (DDT), for
evaluating the correlations of descriptors and then obtaining the
category-consistent regions, which can accurately locate the common object in a
set of images. Empirical studies validate the effectiveness of the proposed DDT
method. On benchmark image co-localization datasets, DDT consistently
outperforms existing state-of-the-art methods by a large margin. Moreover, DDT
also demonstrates good generalization ability for unseen categories and
robustness in dealing with noisy data.
Comment: Accepted by IJCAI 201
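The idea of evaluating descriptor correlations can be sketched as follows: pool the C-dimensional convolutional descriptors from all images in the set, take the leading principal direction of their covariance, and project each image's descriptors onto it to obtain per-image indicator maps whose positive region marks the category-consistent object. This is a simplified reading of the method; the backbone, layer choice, and any post-processing are omitted and the toy feature maps are synthetic:

```python
import numpy as np

def ddt_indicator_maps(feature_maps):
    """DDT-style co-localization sketch.

    feature_maps: list of (H, W, C) deep conv activations for a set of
    images (assumed pre-extracted). Returns one (H, W) indicator map per
    image; entries of large magnitude mark the common-object region.
    """
    descs = np.concatenate([f.reshape(-1, f.shape[-1]) for f in feature_maps])
    mean = descs.mean(axis=0)
    cov = np.cov((descs - mean).T)
    # leading eigenvector = direction of maximal descriptor correlation
    w, v = np.linalg.eigh(cov)
    p1 = v[:, -1]
    return [(f - mean) @ p1 for f in feature_maps]

# Toy example: a strong "object" pattern shared by two feature maps
rng = np.random.default_rng(0)
maps = []
for _ in range(2):
    f = rng.normal(0.0, 0.1, size=(8, 8, 16))
    f[2:6, 2:6, :] += 1.0          # common pattern in the same channels
    maps.append(f)
inds = ddt_indicator_maps(maps)
```

In this toy setup the shared 4x4 patch projects with much larger magnitude than the background, which is the signal DDT thresholds to localize the common object.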
An experimental investigation of supercritical CO2 accidental release from a pressurized pipeline
Laboratory-scale experiments have been conducted to investigate the release of supercritical CO2 from pipelines, including the rapid depressurization process and jet-flow phenomena for different sizes of the leakage nozzle. The dry-ice bank formed near the leakage nozzle is affected by the nozzle size. The local Nusselt numbers at the leakage nozzle are calculated, and the data indicate enhanced convective heat transfer for larger leakage holes. The mass outflow rates for different sizes of leakage holes are obtained and compared with two typical accidental gas-release mathematical models. The results show that the “hole model” gives better predictions than the “modified model” for small leakage holes. The experiments provide fundamental data on the supercritical-gas multiphase flows of CO2 in the leakage process, which can be used to guide the development of leakage detection technology and risk assessment for CO2 pipeline transportation
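The "hole model" referred to above is, in its classic ideal-gas form, the choked-orifice gas-release formula. A sketch with illustrative CO2 parameters (the ideal-gas assumption is only indicative for supercritical CO2, and the discharge coefficient and conditions below are our assumptions, not the paper's):

```python
import math

def hole_model_mass_flow(P, T, d_hole, Cd=0.85, gamma=1.28, M=0.044):
    """Choked (sonic) mass outflow through a small hole, in kg/s.

    Classic 'hole model' for gas release from a pressurised vessel,
    ideal-gas form. P: stagnation pressure [Pa], T: temperature [K],
    d_hole: hole diameter [m]. Cd, gamma, M are illustrative CO2 values.
    """
    R = 8.314                                   # J/(mol K)
    A = math.pi * (d_hole / 2.0) ** 2           # hole area [m^2]
    # critical pressure-ratio factor for choked flow
    k = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0))
    return Cd * A * P * math.sqrt(gamma * M / (R * T) * k)

# Example: 8 MPa, 310 K, 5 mm hole (illustrative pipeline conditions)
q = hole_model_mass_flow(8e6, 310.0, 5e-3)
```

The mass flow rate scales with the hole area, which is consistent with the experimental observation that larger leakage holes give larger outflow rates.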
A modelling study of the multiphase leakage flow from a pressurised CO2 pipeline
Accidental leakage is one of the main risks in the pipeline transportation of high-pressure CO2. The decompression of high-pressure CO2 involves complex phase transitions and large variations in the pressure and temperature fields. A mathematical method based on the homogeneous equilibrium mixture assumption is presented for simulating the leakage flow through a nozzle in a pressurised CO2 pipeline. The decompression process is represented by two sub-models: the flow in the pipe is described by a blowdown model, while the leakage flow through the nozzle is calculated under the capillary-tube assumption. In the simulation, two real-gas equations of state were employed in place of the ideal-gas equation of state. Moreover, for validation, the computed nozzle flow was compared with measurement data from laboratory experiments on pressurised CO2 pipeline leakage. The thermodynamic processes of the fluid in both the pipeline and the nozzle were described and analysed
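The abstract does not name the two real-gas equations of state; the Peng-Robinson EOS is one common choice for CO2 and serves as an illustrative example of how pressure is obtained from temperature and molar volume without the ideal-gas assumption:

```python
import math

# Peng-Robinson EOS for CO2 -- one common real-gas choice; the paper
# does not specify which equations of state it uses, so treat both the
# EOS and the constants below as illustrative.
R = 8.314                                  # J/(mol K)
TC, PC, OMEGA = 304.13, 7.377e6, 0.225     # CO2 critical point, acentric factor

def pr_pressure(T, v):
    """Pressure [Pa] from temperature T [K] and molar volume v [m^3/mol]."""
    a = 0.45724 * R ** 2 * TC ** 2 / PC
    b = 0.07780 * R * TC / PC
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / TC))) ** 2
    return R * T / (v - b) - a * alpha / (v ** 2 + 2 * b * v - b ** 2)

# Dense supercritical state typical of pipeline transport (illustrative)
p_dense = pr_pressure(310.0, 1.0e-4)
```

At low density the attraction and co-volume corrections vanish and the result approaches the ideal-gas pressure, while at pipeline-like densities the two predictions differ substantially, which is why the modelling study replaces the ideal-gas law.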
No-go Theorem for One-way Quantum Computing on Naturally Occurring Two-level Systems
One-way quantum computing achieves the full power of quantum computation by
performing single particle measurements on some many-body entangled state,
known as the resource state. As single particle measurements are relatively
easy to implement, the preparation of the resource state becomes a crucial
task. An appealing approach is simply to cool a strongly correlated quantum
many-body system to its ground state. In addition to requiring the ground state
of the system to be universal for one-way quantum computing, we also want the
Hamiltonian to have a non-degenerate ground state protected by a fixed energy
gap, to involve only two-body interactions, and to be frustration-free so that
measurements in the course of the computation leave the remaining particles in
the ground space. Recently, significant efforts have been made in the search
for resource states that appear naturally as ground states in spin-lattice
systems. This approach has proved successful in spin-5/2 and spin-3/2 systems. Yet,
it remains an open question whether there could be such a natural resource
state in a spin-1/2, i.e., qubit system. Here, we give a negative answer to
this question by proving that it is impossible for a genuinely entangled qubit
state to be a non-degenerate ground state of any two-body frustration-free
Hamiltonian. Moreover, we prove that every spin-1/2 frustration-free
Hamiltonian with two-body interactions always has a ground state that is a
product of single- or two-qubit states, a stronger result that is of interest
independently of the context of one-way quantum computing.
Comment: 5 pages, 1 figure
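The frustration-free condition used in the abstract can be written out explicitly; the notation below is ours, chosen for concreteness. A two-body Hamiltonian on qubits has the form

```latex
H = \sum_{\langle i,j \rangle} h_{ij}, \qquad
h_{ij} \ \text{acting on qubits $i$ and $j$ only.}
```

Shifting each term so its lowest eigenvalue is zero gives $h_{ij} \ge 0$ for every pair, and frustration-freeness means a ground state minimises all terms simultaneously:

```latex
H \, |\psi_0\rangle = 0
\;\Longleftrightarrow\;
h_{ij} \, |\psi_0\rangle = 0 \quad \text{for all } \langle i,j \rangle .
```

The stronger result quoted above then says that any such $H$ on spin-1/2 particles admits a ground state of the product form $|\phi\rangle = \bigotimes_k |\phi_k\rangle$, where each factor $|\phi_k\rangle$ involves at most two qubits.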
Towards high-throughput microstructure simulation in compositionally complex alloys via machine learning
The coupling of computational thermodynamics and kinetics has been the central research theme in Integrated Computational Materials Engineering (ICME). Two major bottlenecks in implementing this coupling, and thus in performing efficient ICME-guided high-throughput discovery of multi-component industrial alloys or optimization of process parameters, are the slow response of kinetic calculations to a given set of compositions and processing conditions, and the quality of the large amount of calculated thermodynamic data. Here, we employ machine learning techniques to eliminate these bottlenecks: (1) intelligent corrupt-data detection and re-interpolation (i.e. data purging/cleaning) applied to a big tabulated thermodynamic dataset, based on an unsupervised learning algorithm, and (2) parameterization, via artificial neural networks, of the purged thermodynamic dataset into a non-linear equation consisting of base functions and parameterization coefficients. These two techniques enable the efficient linkage of high-quality data with a previously developed microstructure model. The proposed approach not only improves model performance, by eliminating the interference of corrupt data, and stability, owing to the boundedness and continuity of the obtained non-linear equation, but also dramatically reduces the running time and the demand for computer memory. The high computational robustness, efficiency, and accuracy, which are prerequisites for high-throughput computing, are verified by a series of case studies on multi-component aluminum, steel, and high-entropy alloys. The proposed data-purging and parameterization methods are expected to apply to various microstructure simulation approaches, and to bridging multi-scale simulations wherever a large amount of input data must be handled. It is concluded that machine learning is a valuable tool in fueling the development of ICME and high-throughput materials simulations.
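The purge-and-re-interpolate step can be illustrated with a deliberately simple stand-in: flag points whose residual from a local moving-median trend has a large robust z-score, then replace them by linear interpolation from their clean neighbours. The paper's actual unsupervised algorithm, thresholds, and data layout are not specified here; everything below is an illustrative sketch on a synthetic one-dimensional table:

```python
import numpy as np

def purge_and_reinterpolate(x, y, z_thresh=3.5):
    """Flag corrupt points in a tabulated curve y(x) and re-interpolate.

    Simple stand-in for an unsupervised data-purge step: points whose
    residual from a 5-point moving-median trend has a robust z-score
    above `z_thresh` are treated as corrupt and replaced by linear
    interpolation from the remaining points.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    pad = np.pad(y, 2, mode="edge")
    trend = np.array([np.median(pad[i:i + 5]) for i in range(len(y))])
    resid = y - trend
    # robust z-score; the floor keeps MAD non-degenerate on smooth data
    mad = max(np.median(np.abs(resid - np.median(resid))), 1e-3 * np.ptp(y))
    z = 0.6745 * np.abs(resid - np.median(resid)) / mad
    bad = z > z_thresh
    clean = y.copy()
    clean[bad] = np.interp(x[bad], x[~bad], y[~bad])
    return clean, bad

# Smooth synthetic "thermodynamic" table with two injected corrupt entries
x = np.linspace(0.0, 1.0, 50)
y = x ** 2
y[10] += 5.0
y[30] -= 4.0
clean, bad = purge_and_reinterpolate(x, y)
```

The two injected outliers are flagged and replaced by values close to the underlying smooth curve, after which the cleaned table is suitable for parameterization by a neural network.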